Open Source Software for Automatic Detection of Cone Photoreceptors in Adaptive Optics Ophthalmoscopy Using Convolutional Neural Networks
Imaging with an adaptive optics scanning light ophthalmoscope (AOSLO) enables direct visualization of the cone photoreceptor mosaic in the living human retina. Quantitative analysis of AOSLO images typically requires manual grading, which is time-consuming and subjective; thus, automated algorithms are highly desirable. Previously developed automated methods often rely on ad hoc rules that may not transfer between different imaging modalities or retinal locations. In this work, we present a convolutional neural network (CNN) based method for cone detection that learns features of interest directly from training data. This cone-identifying algorithm was trained and validated on separate data sets of confocal and split-detector AOSLO images, with results that closely mimic the gold-standard manual process. Further, without any algorithmic modifications for a specific AOSLO imaging system, our fully automated multi-modality CNN-based cone detection method produced results comparable to previous automatic cone segmentation methods that relied on application-specific ad hoc rules. We have made free open-source software for the proposed method, along with the corresponding training and testing datasets, available online.
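Detection pipelines of this kind commonly have the CNN emit a per-pixel cone-probability map, from which cone centers are then extracted as thresholded local maxima. The sketch below shows only that generic post-processing step in NumPy; the function name, threshold, and 3x3 neighborhood are illustrative assumptions, not the released implementation:

```python
import numpy as np

def detect_cones(prob_map, threshold=0.5):
    """Extract cone centers as strict local maxima of a probability map.

    prob_map  : 2-D array of per-pixel cone probabilities in [0, 1]
    threshold : minimum probability for a pixel to count as a detection
    Returns a list of (row, col) coordinates, in row-major order.
    """
    h, w = prob_map.shape
    centers = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            p = prob_map[r, c]
            if p < threshold:
                continue
            window = prob_map[r - 1:r + 2, c - 1:c + 2]
            # keep only strict local maxima within the 3x3 neighborhood
            if p >= window.max() and (window == p).sum() == 1:
                centers.append((r, c))
    return centers

# toy probability map with two clear peaks
pm = np.zeros((7, 7))
pm[2, 2] = 0.9
pm[5, 4] = 0.8
print(detect_cones(pm))  # [(2, 2), (5, 4)]
```

In practice a small minimum-distance constraint between detections is often added so that one cone is not reported twice.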
SpectralDiff: A Generative Framework for Hyperspectral Image Classification with Diffusion Models
Hyperspectral image (HSI) classification is an important problem in the remote sensing field, with extensive applications in earth science. In recent years, a large number of deep learning-based HSI classification methods have been proposed. However, existing methods have limited ability to handle high-dimensional, highly redundant, and complex data, making it challenging to capture the spectral-spatial distributions of the data and the relationships between samples. To address this issue, we propose a generative framework for HSI classification with diffusion models (SpectralDiff) that effectively mines the distribution information of high-dimensional and highly redundant data by iteratively denoising and explicitly constructing the data generation process, thus better reflecting the relationships between samples. The framework consists of a spectral-spatial diffusion module and an attention-based classification module. The spectral-spatial diffusion module adopts forward and reverse spectral-spatial diffusion processes to achieve adaptive construction of sample relationships without requiring prior knowledge of graph structure or neighborhood information. It captures the spectral-spatial distribution and contextual information of objects in the HSI and mines unsupervised spectral-spatial diffusion features within the reverse diffusion process. Finally, these features are fed into the attention-based classification module for per-pixel classification. The diffusion features facilitate cross-sample perception via the reconstruction distribution, leading to improved classification performance. Experiments on three public HSI datasets demonstrate that the proposed method achieves better performance than state-of-the-art methods. For the sake of reproducibility, the source code of SpectralDiff will be publicly available at https://github.com/chenning0115/SpectralDiff
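In standard denoising diffusion models, the forward process the abstract refers to has a closed form: a noisy sample at step t is a Gaussian corruption of the clean input governed by a noise schedule. A minimal NumPy sketch of that standard formulation follows; the linear schedule and step count are common defaults and stand in for whatever SpectralDiff actually uses:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for a standard DDPM forward process.

    x0    : clean spectral vector for one pixel
    t     : diffusion step index (0-based)
    betas : per-step noise schedule
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]          # cumulative product: alpha-bar_t
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)          # common linear schedule
x0 = rng.standard_normal(200)                  # e.g. a 200-band spectrum
xt, eps = forward_diffuse(x0, t=999, betas=betas, rng=rng)
print(xt.shape)  # (200,)
```

The reverse process trains a network to predict `eps` from `xt` and `t`; the intermediate activations of that denoiser are the "diffusion features" fed to the classifier.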
Classification of hyperspectral images by exploiting spectral-spatial information of superpixel via multiple kernels
For the classification of hyperspectral images (HSIs), this paper presents a novel framework to effectively utilize the spectral-spatial information of superpixels via multiple kernels, termed superpixel-based classification via multiple kernels (SC-MK). In an HSI, each superpixel can be regarded as a shape-adaptive region consisting of a number of spatially neighboring pixels with very similar spectral characteristics. First, the proposed SC-MK method adopts an over-segmentation algorithm to cluster the HSI into many superpixels. Then, three kernels are separately employed to exploit the spectral information as well as the spatial information within and among superpixels. Finally, the three kernels are combined and incorporated into a support vector machine (SVM) classifier. Experimental results on three widely used real HSIs indicate that the proposed SC-MK approach outperforms several well-known classification methods.
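In the multiple-kernel literature, the final combination step is typically a convex combination of base kernel Gram matrices, which is then passed to an SVM with a precomputed kernel. A hedged NumPy sketch of that generic combination (the RBF widths and weights are illustrative, and the two spatial kernels are stand-ins for the paper's within- and among-superpixel terms):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    """Convex combination of base kernels; weights must sum to 1."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0)
    return sum(w * K for w, K in zip(weights, kernels))

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 10))         # 5 pixels, 10 spectral bands
K_spec   = rbf_kernel(X, X, gamma=0.5)   # spectral (per-pixel) kernel
K_within = rbf_kernel(X, X, gamma=0.1)   # stand-in: within-superpixel kernel
K_among  = rbf_kernel(X, X, gamma=0.05)  # stand-in: among-superpixel kernel
K = combine_kernels([K_spec, K_within, K_among], [0.5, 0.3, 0.2])
print(K.shape, bool(np.allclose(K, K.T)))  # (5, 5) True
```

Because each base kernel is positive semidefinite and the weights are nonnegative, the combined `K` remains a valid SVM kernel.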
RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs. Although some saliency models have been proposed to address the intrinsic problems of optical RSIs (such as complex backgrounds and scale-variant objects), their accuracy and completeness are still unsatisfactory. To this end, in this paper we propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs. The relational reasoning module, which integrates the spatial and channel dimensions, is designed to infer semantic relationships from high-level encoder features, thereby promoting the generation of more complete detection results. The parallel multi-scale attention module is proposed to effectively restore detail information and address the scale variation of salient objects by using low-level features refined by multi-scale attention. Extensive experiments on two datasets demonstrate that our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
Comment: 11 pages, 9 figures. Accepted by IEEE Transactions on Geoscience and Remote Sensing, 2021. Project: https://rmcong.github.io/proj_RRNet.htm
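Channel-dimension attention of the kind such modules build on is commonly implemented as global average pooling followed by a small gating network whose sigmoid output rescales each feature channel. The NumPy sketch below shows only that generic squeeze-and-excitation-style mechanism, not RRNet's actual parallel multi-scale design; all names and shapes are illustrative:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention.

    feat : (C, H, W) feature map
    w1   : (C//r, C) reduction weights;  w2 : (C, C//r) expansion weights
    Returns the feature map rescaled per channel by a sigmoid gate.
    """
    squeeze = feat.mean(axis=(1, 2))             # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)       # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gate -> (C,)
    return feat * gate[:, None, None]            # broadcast over H, W

rng = np.random.default_rng(2)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

A multi-scale variant would apply such gating to features pooled at several resolutions in parallel and merge the results.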
Self-Supervised Learning With Adaptive Distillation for Hyperspectral Image Classification
Hyperspectral image (HSI) classification is an important topic in the remote sensing community, with a wide range of applications in geoscience. Recently, deep learning-based methods have been widely used in HSI classification. However, due to the scarcity of labeled samples in HSI, the potential of deep learning-based methods has not been fully exploited. To solve this problem, a self-supervised learning (SSL) method with adaptive distillation is proposed to train the deep neural network with extensive unlabeled samples. The proposed method consists of two modules: adaptive knowledge distillation with spatial-spectral similarity and 3-D transformation on HSI cubes. The SSL with adaptive knowledge distillation uses self-supervised information to train the network by knowledge distillation, where the self-supervised knowledge is the adaptive soft label generated by spatial-spectral similarity measurement. The SSL with adaptive knowledge distillation mainly includes the following three steps. First, the similarity between unlabeled samples and object classes in the HSI is generated based on the spatial-spectral joint distance (SSJD) between unlabeled samples and labeled samples. Second, the adaptive soft label of each unlabeled sample is generated to measure the probability that the unlabeled sample belongs to each object class. Third, a progressive convolutional network (PCN) is trained by minimizing the cross-entropy between the adaptive soft labels and the probabilities generated by the forward propagation of the PCN. The SSL with 3-D transformation rotates the HSI cube in both the spectral domain and the spatial domain to fully exploit the labeled samples. Experiments on three public HSI datasets have demonstrated that the proposed method can achieve better performance than existing state-of-the-art methods.
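The first two steps above can be sketched as a softmax over negative per-class distances from an unlabeled sample to its nearest labeled sample of each class. In this NumPy sketch a plain Euclidean spectral distance stands in for the paper's spatial-spectral joint distance (SSJD), and the temperature is an assumed hyperparameter:

```python
import numpy as np

def adaptive_soft_label(x, labeled, labels, n_classes, temperature=1.0):
    """Soft label for one unlabeled sample from distances to labeled samples.

    x        : spectral vector of the unlabeled sample
    labeled  : (N, B) spectra of labeled samples
    labels   : (N,) integer class index of each labeled sample
    Returns a probability vector over the n_classes classes.
    """
    # per-class distance: nearest labeled sample of that class
    # (a Euclidean stand-in for the paper's SSJD metric)
    d = np.array([
        np.linalg.norm(labeled[labels == c] - x, axis=1).min()
        for c in range(n_classes)
    ])
    logits = -d / temperature
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()

labeled = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
labels = np.array([0, 0, 1])
p = adaptive_soft_label(np.array([0.5, 0.5]), labeled, labels, n_classes=2)
print(p.argmax(), round(float(p.sum()), 6))  # 0 1.0
```

Training then minimizes the cross-entropy between such soft labels and the network's predicted class probabilities.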